Automated Pollen Recognition in Optical and Holographic Microscopy Images

Warshaneyan, Swarn Singh, Ivanovs, Maksims, Cugmas, Blaž, Bērziņa, Inese, Goldberga, Laura, Tamosiunas, Mindaugas, Kadiķis, Roberts

arXiv.org Machine Learning

Abstract--This study explores the application of deep learning to improve and automate pollen grain detection and classification in both optical and holographic microscopy images, with a particular focus on veterinary cytology use cases. We used YOLOv8s for object detection and MobileNetV3L for the classification task, evaluating their performance across imaging modalities. The models achieved 91.3% mAP50 for detection and 97% overall accuracy for classification on optical images, whereas the initial performance on greyscale holographic images was substantially lower. We addressed this performance gap through dataset expansion using automated labeling and bounding box area enlargement. These techniques, applied to holographic images, improved detection performance from 2.49% to 13.3% mAP50 and classification performance from 42% to 54%. Our work demonstrates that, at least for image classification tasks, it is possible to pair deep learning techniques with cost-effective lensless digital holographic microscopy devices. I. INTRODUCTION Microscopy is an integral part of most veterinary medicine diagnostic procedures.
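The bounding box area enlargement mentioned above can be illustrated with a short sketch. This is not the authors' code; it is a minimal, hypothetical helper assuming YOLO-format labels (normalized center x, center y, width, height), where each box is scaled up by a linear factor and clamped to stay inside the image:

```python
def enlarge_box(cx, cy, w, h, scale=1.5):
    """Enlarge a YOLO-format box (normalized cx, cy, w, h) by a linear
    scale factor, clamping so the box stays inside the unit square."""
    w2, h2 = min(w * scale, 1.0), min(h * scale, 1.0)
    # Shift the center inward if the enlarged box would cross an edge.
    cx2 = min(max(cx, w2 / 2), 1 - w2 / 2)
    cy2 = min(max(cy, h2 / 2), 1 - h2 / 2)
    return cx2, cy2, w2, h2
```

Applied to every label transferred from the optical to the holographic modality, such a transform gives the detector extra context around each grain, which is one plausible reading of how the reported mAP50 gain was obtained. The `scale` value here is illustrative, not taken from the paper.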


AI-Augmented Pollen Recognition in Optical and Holographic Microscopy for Veterinary Imaging

Warshaneyan, Swarn S., Ivanovs, Maksims, Cugmas, Blaž, Bērziņa, Inese, Goldberga, Laura, Tamosiunas, Mindaugas, Kadiķis, Roberts

arXiv.org Machine Learning

We present a comprehensive study on fully automated pollen recognition across both conventional optical and digital in-line holographic microscopy (DIHM) images of sample slides. Visually recognizing pollen in unreconstructed holographic images remains challenging due to speckle noise, twin-image artifacts, and substantial divergence from bright-field appearances. We establish the performance baseline by training YOLOv8s for object detection and MobileNetV3L for classification on a dual-modality dataset of automatically annotated optical and affinely aligned DIHM images. On optical data, detection mAP50 reaches 91.3% and classification accuracy reaches 97%, whereas on DIHM data we achieve only 8.15% detection mAP50 and 50% classification accuracy. Enlarging the bounding boxes of pollen grains in DIHM images relative to those acquired in aligned optical images yields 13.3% detection mAP50 and 54% classification accuracy. To improve object detection in DIHM images, we employ a Wasserstein GAN with spectral normalization (WGAN-SN) to create synthetic DIHM images, yielding an FID score of 58.246. Mixing real-world and synthetic DIHM data at a 1.0:1.5 ratio improves object detection mAP50 to 15.4%. These results demonstrate that GAN-based augmentation can reduce the performance divide, bringing fully automated DIHM workflows for veterinary imaging a small but important step closer to practice.
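The real-to-synthetic mixing step can be sketched as follows. This is an illustrative helper, not the authors' pipeline: it builds a training list containing all real samples plus randomly drawn GAN-generated samples at a fixed ratio, matching the abstract's 1.0:1.5 mix:

```python
import random

def mix_datasets(real, synthetic, synth_per_real=1.5, seed=0):
    """Combine real samples with GAN-generated ones at a fixed ratio
    (1 real : synth_per_real synthetic), then shuffle for training."""
    rng = random.Random(seed)
    n_synth = int(round(len(real) * synth_per_real))
    if n_synth > len(synthetic):
        raise ValueError("not enough synthetic samples for requested ratio")
    mixed = list(real) + rng.sample(list(synthetic), n_synth)
    rng.shuffle(mixed)
    return mixed
```

For 100 real images this draws 150 synthetic ones, giving a 250-image training set; the function names and the fixed seed are assumptions for reproducibility of the sketch.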


Self-Supervised and Few-Shot Learning for Robust Bioaerosol Monitoring

Willi, Adrian, Baumann, Pascal, Erb, Sophie, Gröger, Fabian, Zeder, Yanick, Lionetti, Simone

arXiv.org Artificial Intelligence

Real-time bioaerosol monitoring is improving the quality of life for people affected by allergies, but it often relies on deep-learning models which pose challenges for widespread adoption. These models are typically trained in a supervised fashion and require considerable effort to produce large amounts of annotated data, an effort that must be repeated for new particles, geographical regions, or measurement systems. In this work, we show that self-supervised learning and few-shot learning can be combined to classify holographic images of bioaerosol particles using a large collection of unlabelled data and only a few examples for each particle type. We first demonstrate that self-supervision on pictures of unidentified particles from ambient air measurements enhances identification even when labelled data is abundant. Most importantly, it greatly improves few-shot classification when only a handful of labelled images are available. Our findings suggest that real-time bioaerosol monitoring workflows can be substantially optimized, and the effort required to adapt models for different situations considerably reduced.
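A common way to realize few-shot classification on top of self-supervised embeddings, consistent with (though not stated by) the abstract, is a nearest-class-mean ("prototypical") classifier: average the embeddings of the few labelled examples per class, then assign each query to the closest prototype. A minimal NumPy sketch, with all names hypothetical:

```python
import numpy as np

def prototype_classify(support_emb, support_lbl, query_emb):
    """Nearest-class-mean few-shot classifier: build one prototype per
    class from the labelled support embeddings, then label each query
    by its nearest prototype in embedding space (Euclidean distance)."""
    classes = sorted(set(support_lbl))
    lbl = np.asarray(support_lbl)
    protos = np.stack([support_emb[lbl == c].mean(axis=0) for c in classes])
    # Distance from every query to every class prototype.
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return [classes[i] for i in d.argmin(axis=1)]
```

With good self-supervised embeddings, a handful of labelled holograms per particle type can suffice to define usable prototypes, which matches the adaptation-cost argument made above.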


Virtual impactor-based label-free bio-aerosol detection using holography and deep learning

Luo, Yi, Zhang, Yijie, Liu, Tairan, Yu, Alan, Wu, Yichen, Ozcan, Aydogan

arXiv.org Artificial Intelligence

Exposure to bio-aerosols such as mold spores and pollen can lead to adverse health effects. There is a need for a portable and cost-effective device for long-term monitoring and quantification of various bio-aerosols. To address this need, we present a mobile and cost-effective label-free bio-aerosol sensor that takes holographic images of flowing particulate matter concentrated by a virtual impactor, which selectively slows down and guides particles larger than ~6 microns to fly through an imaging window. The flowing particles are illuminated by a pulsed laser diode, casting their inline holograms on a CMOS image sensor in a lens-free mobile imaging device. The illumination contains three short pulses with a negligible shift of the flowing particle within one pulse, and triplicate holograms of the same particle are recorded at a single frame before it exits the imaging field-of-view, revealing different perspectives of each particle. The particles within the virtual impactor are localized through a differential detection scheme, and a deep neural network classifies the aerosol type in a label-free manner, based on the acquired holographic images. We demonstrated the success of this mobile bio-aerosol detector with a virtual impactor using different types of pollen (i.e., bermuda, elm, oak, pine, sycamore, and wheat) and achieved a blind classification accuracy of 92.91%. This mobile and cost-effective device weighs ~700 g and can be used for label-free sensing and quantification of various bio-aerosols over extended periods since it is based on a cartridge-free virtual impactor that does not capture or immobilize particulate matter.


Microplankton life histories revealed by holographic microscopy and deep learning

Bachimanchi, Harshith, Midtvedt, Benjamin, Midtvedt, Daniel, Selander, Erik, Volpe, Giovanni

arXiv.org Artificial Intelligence

Department of Marine Sciences, University of Gothenburg, Sweden (Dated: February 21, 2022) The marine microbial food web plays a central role in the global carbon cycle. Our mechanistic understanding of the ocean, however, is biased towards its larger constituents, while rates and biomass fluxes in the microbial food web are mainly inferred from indirect measurements and ensemble averages. Yet, resolution at the level of the individual microplankton is required to advance our understanding of the oceanic food web. Here, we demonstrate that, by combining holographic microscopy with deep learning, we can follow microplanktons throughout their lifespan, continuously measuring their three-dimensional position and dry mass. The deep learning algorithms circumvent the computationally intensive processing of holographic data and allow rapid measurements over extended time periods. This permits us to reliably estimate growth rates, both in terms of dry mass increase and cell divisions, as well as to measure trophic interactions between species such as predation events. The individual resolution provides information about selectivity, individual feeding rates and handling times for individual microplanktons. This method is particularly useful to explore the flux of carbon through micro-zooplankton, the most important and least known group of primary consumers in the global oceans. Moreover, indirect measurement is well established in terrestrial ecology.


New AI Enables Rapid Detection of Harmful Bacteria

#artificialintelligence

Testing for pathogens is a critical component of maintaining public health and safety. Having a method to rapidly and reliably test for harmful germs is essential for diagnosing diseases, maintaining clean drinking water, regulating food safety, conducting scientific research, and other important functions of modern society. In recent research, scientists from University of California, Los Angeles (UCLA), have demonstrated that artificial intelligence (AI) can detect harmful bacteria from a water sample up to 12 hours faster than the current gold-standard Environmental Protection Agency (EPA) methods. In a new study published yesterday in Light: Science and Applications, the researchers created a time-lapse imaging platform that uses two separate deep neural networks (DNNs) for the detection and classification of bacteria. The team tested the high-throughput bacterial colony growth detection and classification system using water suspensions with added coliform bacteria of E. coli (including chlorine-stressed E. coli), K. pneumoniae and K. aerogenes, grown on chromogenic agar as the culture medium.


Three-dimensional vectorial holography based on machine learning inverse design

#artificialintelligence

The three-dimensional (3D) vectorial nature of electromagnetic waves of light has not only played a fundamental role in science but also driven disruptive applications in optical display, microscopy, and manipulation. However, conventional optical holography can address only the amplitude and phase information of an optical beam, leaving the 3D vectorial feature of light completely inaccessible. We demonstrate 3D vectorial holography where an arbitrary 3D vectorial field distribution on a wavefront can be precisely reconstructed using the machine learning inverse design based on multilayer perceptron artificial neural networks. This 3D vectorial holography allows the lensless reconstruction of a 3D vectorial holographic image with an ultrawide viewing angle of 94° and a high diffraction efficiency of 78%, necessary for floating displays. The results provide an artificial intelligence–enabled holographic paradigm for harnessing the vectorial nature of light, enabling new machine learning strategies for holographic 3D vectorial field multiplexing in display and encryption. Since its invention by Gabor (1), optical holography, which allows the reconstruction of both the amplitude and phase information of a three-dimensional (3D) image of an object, has propelled many advanced technologies including optical display (2–5), data storage (6, 7), optical trapping (8), holographic fabrication (9), pattern recognition (10), artificial neural networks (11), and all-optical machine learning (12).


How AI Could Track Allergens on Every Block NVIDIA Blog

#artificialintelligence

As seasonal allergy sufferers will attest, the concentration of allergens in the air varies every few paces. A nearby blossoming tree or sudden gust of pollen-tinged wind can easily set off sneezing and watery eyes. But concentrations of airborne allergens are reported city by city, at best. A network of deep learning-powered devices could change that, enabling scientists to track pollen density block by block. Researchers at the University of California, Los Angeles, have developed a portable AI device that identifies levels of five common allergens from pollen and mold spores with 94 percent accuracy, according to the team's recent paper.


Custom-made smart glasses pick up where Google Glass left off

Engadget

Earlier this month, Thalmic Labs announced it would be ending the production of Myo, a gesture-controlled armband that it's been developing for the past few years. The company has decided to shift focus to an entirely different project. Today, it's finally ready to reveal what that project is. It's called Focals, a pair of smart glasses that uses holographic display technology. "Focals are a pair of everyday smart glasses that are designed from the eyewear-first perspective," Stephen Lake, North's CEO and co-founder, told Engadget.


Address the Consequences of AI in Advance

Communications of the ACM

The viewpoints by Alan Bundy "Smart Machines Are Not a Threat to Humanity" and Devdatt Dubhashi and Shalom Lappin "AI Dangers: Imagined and Real" (both Feb. 2017) argued against the possibility of a near-term singularity wherein super-intelligent AIs exceed human capabilities and control. Both relied heavily on the lack of direct relevance of Moore's Law, noting raw computing power does not by itself lead to human-like intelligence. Bundy also emphasized the difference between a computer's efficiency in working an algorithm to solve a narrow, well-defined problem and human-like generalized problem-solving ability. Dubhashi and Lappin noted incremental progress in machine learning or better knowledge of a biological brain's wiring do not automatically lead to the "unanticipated spurts" of progress that characterize scientific breakthroughs. These points are valid, but a more accurate characterization of the situation is that computer science may well be just one conceptual breakthrough away from being able to build an artificial general intelligence.